#computer science interview
Note
do you have any tips for speaking to/reaching out to recruiters? i'm looking for new grad roles and ppl keep telling me to but i don't even know where to start or find any and all the articles online are so intimidating
Hiya 💗
The people are right, it's one of the best ways to put yourself out there! Oh, this is going to be a long one~!
I'll share tips from my own experience. This might help you, or it might not, because I never did it any other way than this:
I applied to a bunch of jobs: for like 5 days straight I was just applying for the sake of it. The more jobs I applied to, the more recruiters had my CV/Resume in their database. I recommend LinkedIn the most as it's super easy to drop a message to the recruiter.
Applied to jobs that I had a 50%+ chance of getting a call for: Obviously this means applying to jobs where you have the skills and the experience (from work or from building projects etc). I say this because, say they do call you, but they ask if you have a certain tech stack and you say no... end of call, really. For me, I had 2 or 3 of the things they were asking for in a candidate, so I got through to the calling stage!
LinkedIn is actually your friend, don't be afraid: During my random job searching, and whilst I was in my job, I had recruiters message me about job opportunities. Why? Because of my profile. You need to have your LinkedIn vamped up; check mine out for reference (click the LinkedIn icon). Make sure to have your skills (e.g. About > Top skills), your work experience (paid or volunteer) and any certificates you have! If you're brave (not like me), start posting on there every once in a while.
LinkedIn again, but Connections: Oh my days, please follow people, even if you don't know them personally. I have 300+ connections (not to brag) but I only know like two handfuls of them; the rest are people who connect with the people I follow. I have met like 5 recruiters this way.
Actually message the recruiters???: Okay, so you followed the people, your CV/Resume is done and dusted, and now you're ready to message those recruiters! In my case, I had more recruiters message me than the other way round, only because I'm shy hehe, so I wait for them to make the first move. They'll usually send a whole message about a new job they have posted to see if you're interested, and then, if you like the job, you can say "Yes please" or whatever the appropriate reply is, and they will send further information or arrange a phone call! If you want to message them first, the easiest way I found to find them is after applying for a job on LinkedIn; they usually list the recruiter in the job posting as a way for people to message them.
DO NOT BE AFRAID OF MESSAGING RECRUITERS: I say this because a) imagine 100 people apply for the job; only 5 of them will message the recruiter (I don't know if the stats are exactly right, I just remember that from bootcamp) because everyone else is too afraid to do it! Missed opportunity! And b) recruiters actually want people to message them. Now, in terms of what to write to them? I don't know exactly, but I would always go for the classic "don't repeat what's on your resume" advice; they're going to read it anyway, so just talk about your experience and skills A BIT, like:
"Hello/Hi, my name is [name]. I hope this message finds you well. I wanted to express my strong interest in the [Job Title] position at [Company Name], for which I recently submitted my application. I'm enthusiastic about the opportunity to join [Company Name] and contribute to [mention something specific you find appealing about the company or role, if possible]. I believe my skills and experience align well with the requirements of the position. Thank you for considering my application, and I look forward to the possibility of discussing my qualifications in more detail. Best regards, [name]
No hire? Don't be sad: Even if they don't hire you, or you don't get through the next stages, keep in touch, even if it's just asking questions about what's on the market, what's new, or how you could do better for similar jobs you want to apply to. They could point you in the right direction! One recruiter said she wanted me to have more projects I was passionate about online, like on GitHub or GitLab, even if they were "silly" projects - at the time, I didn't have many projects online, so it made sense! See, I took that advice and now I'm a project-making machine (a bit)! The advice they give sticks forever!
Remember, reaching out to recruiters and applying for jobs is a numbers game. The more applications you submit, the better your chances of landing interviews. I really hope this helps and I didn't make too many spelling mistakes! These are all the things I could note down off the top of my head!
I've made other posts on my coding blog about career advice:
🌐 Tips for Landing Your First Entry-Level Developer Job
🌐 Career Services For Web Dev (could be useful to you too!)
🌐 The Talent Cloud Community: Careers Workshop
Good luck with your job search!
⤷ ♡ my shop ○ my mini website ○ pinned ○ navigation ♡
#resources#my asks#codeblr#coding#progblr#programming#studying#studyblr#learn to code#comp sci#tech#programmer#career advice#career#job interview#first job#jobsearch#computer science
138 notes
Text
I'm not an extrovert. At all. In everyday life, I'm a yapper, sure, but I need someone to first assure me I am okay to yap, so I don't start conversations, even when I really want to join in sometimes! It's just the social anxiety acting up. God knows where from and why I lose a lot of my inhibitions when it comes to talking to people about music. I don't know where the confidence has suddenly sprung from. I've made a crazy amount of friends in musical circles, either just talking to people about common music or (since it is after all in music circles) talking to bands about their own music. I let out a sigh of relief any time an interaction goes well, because in truth it's going against my every instinct. I wish I could do that in everyday life
#like that's the point where we need to remind everyone around me that as much as I say#radio is 'a job'-- it's not 'my job' lol. I wish I was this interested in data science#but like. Honestly?? I'm not even a data scientist!? I answered a few questions about classical AI having come from a computer science#background and now people are saying to me 'I know you're a data scientist and not a programmer' sir I am a computer scientist#what are you on about#and like I guess I get to google things and they're paying me so I'm not complaining but like I am not a data scientist#my biggest data scientist moment was when I asked 'do things in data science ever make sense???' and a bunch of data scientists went#'no :) Welcome to the club' ???????#why did I do a whole ass computer science degree then. Does anyone at all even want that anymore. Has everything in the realm of#computer science just been Solved. What of all the problems I learned and researched about. Which were cool. Are they just dead#Ugh the worst thing the AI hype has done rn is it has genuinely required everyone to pretend they're a data scientist#even MORE than before. I hate this#anyway; I wish I didn't hate it and I was curious and talked to many people in the field#like it's tragicomedy when every person I meet in music is like 'you've got to pursue this man you're a great interviewer blah blah blah'#and like I appreciate that this is coming from people who themselves have/are taking a chance on life#but. I kinda feel like my career does not exist anymore realistically so unless 1) commercial radio gets less shitty FAST#2) media companies that are laying off 50% of their staff miraculously stop or 3) Tom Power is suddenly feeling generous and wants#a completely unknown idiot to step into the biggest fucking culture show in the country (that I am in no way qualified for)#yeah there's very very little else. There's nothing else lol#Our country does not hype. They don't really care for who you are. If you make a decent connection with them musically they will come to you#Canada does not make heroes out of its talent. They will not be putting money into any of that. Greenlight in your dreams.#this is something I've been told (and seen) multiple times. We'll see it next week-- there are Olympic medallists returning to uni next week#no one cares: the phrase is 'America makes celebrities out of their sportspeople'; we do not. Replace sportspeople with any public profession#Canada does not care for press about their musicians. The only reason NME sold here was because of Anglophilia not because of music journalism#anyway; personal
10 notes
Text
The Mathematical Foundations of Machine Learning
In the world of artificial intelligence, machine learning is a crucial component that enables computers to learn from data and improve their performance over time. However, the math behind machine learning is often shrouded in mystery, even for those who work with it every day. Anil Ananthaswamy, author of the book "Why Machines Learn," sheds light on the elegant mathematics that underlies modern AI, and his journey is a fascinating one.
Ananthaswamy's interest in machine learning began when he started writing about it as a science journalist. His software engineering background sparked a desire to understand the technology from the ground up, leading him to teach himself coding and build simple machine learning systems. This exploration eventually led him to appreciate the mathematical principles that underlie modern AI. As Ananthaswamy notes, "I was amazed by the beauty and elegance of the math behind machine learning."
Ananthaswamy highlights the elegance of machine learning mathematics, which goes beyond the commonly known subfields of calculus, linear algebra, probability, and statistics. He points to specific theorems and proofs, such as a 1959 proof related to artificial neural networks, as examples of that beauty and elegance. Gradient descent, for instance, the fundamental algorithm by which most models are trained, shows how a simple piece of calculus can be used to iteratively optimize model parameters.
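As a concrete illustration (mine, not Ananthaswamy's), here is a minimal gradient descent sketch in plain Python that fits a one-parameter line to made-up data; the toy numbers and learning rate are arbitrary choices for the example.

```python
# Minimal gradient descent sketch: fit y = w * x to toy data by
# repeatedly stepping w against the gradient of the squared error.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.1]          # roughly y = 2x, with a little noise

w = 0.0                            # initial guess for the parameter
learning_rate = 0.01

for step in range(200):
    # Mean squared error: L(w) = (1/N) * sum((w*x - y)^2)
    # Its derivative w.r.t. w: dL/dw = (2/N) * sum((w*x - y) * x)
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= learning_rate * grad      # move w downhill along the loss surface

print(round(w, 3))                 # converges to roughly 2.0
```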
Ananthaswamy emphasizes the need for a broader understanding of machine learning among non-experts, including science communicators, journalists, policymakers, and users of the technology. He believes that only when we understand the math behind machine learning can we critically evaluate its capabilities and limitations. This is crucial in today's world, where AI is increasingly being used in various applications, from healthcare to finance.
A deeper understanding of machine learning mathematics has significant implications for society. It can help us evaluate AI systems more effectively, develop more transparent and explainable AI systems, and address AI bias to ensure fairness in decision-making. As Ananthaswamy notes, "The math behind machine learning is not just a tool, but a way of thinking that can help us create more intelligent and more human-like machines."
The Elegant Math Behind Machine Learning (Machine Learning Street Talk, November 2024)
youtube
Matrices are used to organize and process complex data, such as images, text, and user interactions, making them a cornerstone in applications like Deep Learning (e.g., neural networks), Computer Vision (e.g., image recognition), Natural Language Processing (e.g., language translation), and Recommendation Systems (e.g., personalized suggestions). To leverage matrices effectively, AI relies on key mathematical concepts like Matrix Factorization (for dimension reduction), Eigendecomposition (for stability analysis), Orthogonality (for efficient transformations), and Sparse Matrices (for optimized computation).
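As a small, hedged illustration of one of those ideas (not from the video): truncated SVD, a standard form of matrix factorization, compresses a user-item matrix into low-dimensional factors of the kind recommendation systems use. The ratings matrix below is made-up toy data.

```python
import numpy as np

# Toy user-item rating matrix (rows = users, columns = items); zeros mean "unrated".
ratings = np.array([
    [5.0, 4.0, 0.0, 1.0],
    [4.0, 5.0, 1.0, 0.0],
    [1.0, 0.0, 5.0, 4.0],
    [0.0, 1.0, 4.0, 5.0],
])

# Matrix factorization via truncated SVD: keep only the top-k singular values,
# giving low-dimensional user and item factors (dimension reduction).
k = 2
U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
user_factors = U[:, :k] * s[:k]        # each user as a k-dimensional vector
item_factors = Vt[:k, :]               # each item as a k-dimensional vector

# Multiplying the factors back together approximates the original matrix and
# can be read as predicted affinities for the unrated (zero) entries.
approx = user_factors @ item_factors
print(np.round(approx, 1))
```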
The Applications of Matrices - What I wish my teachers told me way earlier (Zach Star, October 2019)
youtube
Transformers are a type of neural network architecture introduced in 2017 by Vaswani et al. in the paper “Attention Is All You Need”. They revolutionized the field of NLP by outperforming traditional recurrent neural network (RNN) and convolutional neural network (CNN) architectures in sequence-to-sequence tasks. The primary innovation of transformers is the self-attention mechanism, which allows the model to weigh the importance of different words in the input irrespective of their positions in the sentence. This is particularly useful for capturing long-range dependencies in text, which was a challenge for RNNs due to vanishing gradients.
Transformers have become the standard for machine translation, offering state-of-the-art results in translating between languages. They are used for both abstractive and extractive summarization, generating concise summaries of long documents. In question answering, they help in understanding the context of a question and identifying relevant answers from a given text, and by analyzing the context and nuances of language they can accurately determine the sentiment behind text. While initially designed for sequential data, variants of transformers (e.g., Vision Transformers, ViT) have been successfully applied to image recognition tasks, treating images as sequences of patches. Transformers are also used to improve the accuracy of speech-to-text systems by better modeling the sequential nature of audio data, and the self-attention mechanism can be beneficial for understanding patterns in time series data, leading to more accurate forecasts.
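To make the self-attention mechanism less abstract, here is a minimal NumPy sketch of scaled dot-product attention; it is a toy, single-head illustration with random stand-in weights, not the full trained, multi-head architecture from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)    # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Each token's query is compared with every token's key, regardless of
    # position, which is how attention captures long-range dependencies.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # (seq_len, seq_len) similarities
    weights = softmax(scores, axis=-1)         # attention weights per token
    return weights @ V, weights                # weighted mix of value vectors

# Toy example: a sequence of 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))

# In a real transformer, Q, K and V come from learned linear projections of x;
# here random matrices stand in for the learned weights.
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
out, attn = scaled_dot_product_attention(x @ W_q, x @ W_k, x @ W_v)
print(out.shape, attn.shape)                   # (4, 8) (4, 4)
```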
Attention Is All You Need (Umar Jamil, May 2023)
youtube
Geometric deep learning is a subfield of deep learning that extends neural networks to data with geometric structure, such as graphs, grids, and manifolds, by exploiting the symmetries of that structure. The field has gained significant attention in recent years.
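For a hedged, minimal sense of what this looks like in practice, here is a sketch of a single graph-convolution step on a toy graph, the workhorse operation of geometric deep learning on graphs; the adjacency matrix and weights are made up, and a real model would learn the weights and stack several such layers.

```python
import numpy as np

# One simplified graph-convolution step: each node's new feature vector is a
# degree-normalised average of its neighbours' features, passed through a
# linear map and a ReLU.
A = np.array([[0, 1, 1, 0],           # adjacency matrix of a toy 4-node graph
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 3))    # initial 3-d node features

A_hat = A + np.eye(4)                  # add self-loops so a node keeps its own info
D_inv = np.diag(1.0 / A_hat.sum(axis=1))            # normalise by node degree
W = np.random.default_rng(1).normal(size=(3, 3))    # stand-in for learned weights

H = np.maximum(0, D_inv @ A_hat @ X @ W)            # aggregate, transform, ReLU
print(H.shape)                                      # (4, 3) updated node features
```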
Michael Bronstein: Geometric Deep Learning (MLSS Kraków, December 2023)
youtube
Traditional Geometric Deep Learning, while powerful, often relies on the assumption of smooth geometric structures. However, real-world data frequently resides in non-manifold spaces where such assumptions are violated. Topology, with its focus on the preservation of proximity and connectivity, offers a more robust framework for analyzing these complex spaces. The inherent robustness of topological properties against noise further solidifies the rationale for integrating topology into deep learning paradigms.
Cristian Bodnar: Topological Message Passing (Michael Bronstein, August 2022)
youtube
Sunday, November 3, 2024
#machine learning#artificial intelligence#mathematics#computer science#deep learning#neural networks#algorithms#data science#statistics#programming#interview#ai assisted writing#machine art#Youtube#lecture
4 notes
Text
got my second uni offer!!!!!!!!!
#this one’s local for computer science#and UNCONDITIONAL WOOOO#just need to interview for the scholarship 😭😭😭
8 notes
Text
#job#jobs#jobsearch#best jobs#job interview#career#lucknow#jobs from home#artificial intelligence#placement engineering colleges in bangalore#engineering college#engineering student#engineering services#engineering solutions#engineering projects#computer science#education#course#technology#research scientist#online jobs#fresher jobs#jobseekers#remote jobs#employment#part time jobs#job search#careers#interview tips#interview with the vampire
2 notes
Text
Get ready to re-enter the Grid! Jeff Bridges confirmed on Facebook – he’s officially back for Tron: Ares. You know, the guy who made Kevin Flynn so unforgettable in the original 1982 movie and its sequel, Tron: Legacy?
Remember when Tron first hit theaters? It was like nothing anyone had seen before. No widespread internet, video games were still finding their footing…and this movie drops with crazy visuals and a mind-bending story about a world inside computers. It didn’t rake in cash right away, but over time? Tron became a total legend, paving the way for a whole new wave of filmmaking.
We’ve been lucky enough to chat with Jeff Bridges over the years, and naturally, we had to ask: what’s the secret behind Tron’s lasting appeal?
#Tron: Ares#Tron#Tron: Legacy#Jeff Bridges#Kevin Flynn#Disney#science fiction#sequel#1980s films#cult classic#behind the scenes#interview#visual effects#computer-generated imagery#film history#Movies#Movie News#Entertainment#Entertainment news#Celebrities#Celebrity#celebrity news#celebrity interviews
2 notes
Text
Consistently being humbled this week. I was so pissed at how much work I'm being assigned. Feeling like I won't find the time to do it and then? My morning class on Thursday was cancelled/turned into an independent study day. Like damn. I love going to that class! But I really needed a minute to catch up on some grading/reading for another class. I also got an A on my Syntax midterm and I wanna keep that momentum going 😤

(I really be mean mugging in the mornings lol)
#so far I've got#interviews to do for my anthro class#group presentation to plan for ancient civilization/literacy#extra grading for computer science because one of the members of our team is out this week#but now I can take my time with some of these
2 notes
Text
AI: An ancient nightmare?
Artificial intelligence’s development seems to be moving at breakneck speed, and the ability of AI to automate even complex tasks – and, potentially, to outwit its human creators – has been making plenty of headlines in recent months. But how far back does our fascination with, and our fear of, AI extend? Matt Elton spoke to Michael Wooldridge, professor of computer science at the University of Oxford, to find out more.

#AI: An ancient nightmare#history extra#history extra podcast#BBC#podcast#history#new books#books and reading#author interview#podcasts#science#science history#AI#machine learning#Michael Wooldridge#computer science
3 notes
Text
Last night I wrote a 1000 word paper I’d been procrastinating for over a week. Today I have to do a whole report on how the Franco-Prussian war influenced art at the time. I am shit at this school stuff
#procrastination#so sad#also I chose both fucking topics#like I did an interview for the 1000 word report#and have an art component to do next week#and I chose the franco-Prussian arts thing#I am not an art major OR a history major#I’m a fucking polisci major#and my best grade is my computer science class#wtf#comp sci#history#leave me alone
0 notes
Text
gods I wish I could reblog without notifying the OP because I truly don't get why the Iron Widow series author even bothered to write in Di Renjie, Wu Zetian's historical bestie, when they clearly don't even give enough of a shit about him to have thoughts about what he'll do in a college AU
#my best guess is uhh. probably law or political science#iirc he did like doing governance stuff so why not#may have picked up a computer science or math minor after randomly finding a coding interview logic puzzle guide book and liking it#he has def read the entire agatha christie bookography too
0 notes
Text
3 Questions: Claire Wang on training the brain for memory sports
New Post has been published on https://thedigitalinsider.com/3-questions-claire-wang-on-training-the-brain-for-memory-sports/


On Nov. 10, some of the country’s top memorizers converged on MIT’s Kresge Auditorium to compete in a “Tournament of Memory Champions” in front of a live audience.
The competition was split into four events: long-term memory, words-to-remember, auditory memory, and double-deck of cards, in which competitors must memorize the exact order of two decks of cards. In between the events, MIT faculty who are experts in the science of memory provided short talks and demos about memory and how to improve it. Among the competitors was MIT’s own Claire Wang, a sophomore majoring in electrical engineering and computer science. Wang has competed in memory sports for years, a hobby that has taken her around the world to learn from some of the best memorists on the planet. At the tournament, she tied for first place in the words-to-remember competition.
The event commemorated the 25th anniversary of the USA Memory Championship Organization (USAMC). USAMC sponsored the event in partnership with MIT’s McGovern Institute for Brain Research, the Department of Brain and Cognitive Sciences, the MIT Quest for Intelligence, and the company Lumosity.
MIT News sat down with Wang to learn more about her experience with memory competitions — and see if she had any advice for those of us with less-than-amazing memory skills.
Q: How did you come to get involved in memory competitions?
A: When I was in middle school, I read the book “Moonwalking with Einstein,” which is about a journalist’s journey from average memory to being named memory champion in 2006. My parents were also obsessed with this TV show where people were memorizing decks of cards and performing other feats of memory. I had already known about the concept of “memory palaces,” so I was inspired to explore memory sports. Somehow, I convinced my parents to let me take a gap year after seventh grade, and I travelled the world going to competitions and learning from memory grandmasters. I got to know the community in that time and I got to build my memory system, which was really fun. I did a lot less of those competitions after that year and some subsequent competitions with the USA memory competition, but it’s still fun to have this ability.
Q: What was the Tournament of Memory Champions like?
A: USAMC invited a lot of winners from previous years to compete, which was really cool. It was nice seeing a lot of people I haven’t seen in years. I didn’t compete in every event because I was too busy to do the long-term memory, which takes you two weeks of memorization work. But it was a really cool experience. I helped a bit with the brainstorming beforehand because I know one of the professors running it. We thought about how to give the talks and structure the event.
Then I competed in the words event, which is when they give you 300 words over 15 minutes, and the competitors have to recall each one in order in a round robin competition. You got two strikes. A lot of other competitions just make you write the words down. The round robin makes it more fun for people to watch. I tied with someone else — I made a dumb mistake — so I was kind of sad in hindsight, but being tied for first is still great.
Since I hadn’t done this in a while (and I was coming back from a trip where I didn’t get much sleep), I was a bit nervous that my brain wouldn’t be able to remember anything, and I was pleasantly surprised I didn’t just blank on stage. Also, since I hadn’t done this in a while, a lot of my loci and memory palaces were forgotten, so I had to speed-review them before the competition. The words event doesn’t get easier over time — it’s just 300 random words (which could range from “disappointment” to “chair”) and you just have to remember the order.
Q: What is your approach to improving memory?
A: The whole idea is that we memorize images, feelings, and emotions much better than numbers or random words. The way it works in practice is we make an ordered set of locations in a “memory palace.” The palace could be anything. It could be a campus or a classroom or a part of a room, but you imagine yourself walking through this space, so there’s a specific order to it, and in every location I place certain information. This is information related to what I’m trying to remember. I have pictures I associate with words and I have specific images I correlate with numbers. Once you have a correlated image system, all you need to remember is a story, and then when you recall, you translate that back to the original information.
Doing memory sports really helps you with visualization, and being able to visualize things faster and better helps you remember things better. You start remembering with spaced repetition that you can talk yourself through. Allowing things to have an emotional connection is also important, because you remember emotions better. Doing memory competitions made me want to study neuroscience and computer science at MIT.
The specific memory sports techniques are not as useful in everyday life as you’d think, because a lot of the information we learn is more operative and requires intuitive understanding, but I do think they help in some ways. First, sometimes you have to initially remember things before you can develop a strong intuition later. Also, since I have to get really good at telling a lot of stories over time, I have gotten great at visualization and manipulating objects in my mind, which helps a lot.
#Advice#amazing#anniversary#approach#book#Brain#Brain and cognitive sciences#brain research#Community#competition#Competitions#computer#Computer Science#Contests and academic competitions#double#Electrical engineering and computer science (EECS)#emotions#engineering#event#Events#Faculty#gap#how#how to#images#intelligence#INterview#it#Learn#learning
0 notes
Text
#remote jobs#employment#mba#mbastudent#placement engineering colleges in bangalore#internship#jobseekers#fresher jobs#online jobs#hr solutions#engineering college#engineering student#engineering services#engineering projects#engineering solutions#computer science#education#research scientist#course#technology#best jobs#jobs#jobs from home#jobsearch#job search#part time jobs#careers#job interview#career company#career advice
1 note
Text
Cracking the Java interview code! 🔍 Let’s talk interfaces: Know your variables (public, static, final) and methods (public, abstract),🧑💻 plus the exciting additions post-Java 8 and 9.💡 Stay ahead with ThinkQuotient! 💼
#developers & startups#education#information technology#software#student#technology#interview#science#javaprogramming#javascript#javatpoint#javajunkie#coding#computerengineering#programming#user interface#software development#cloud computing#ThinkQuotient
0 notes
Text
Just survived my first technical interview…
I didn’t know it was going to be a technical interview. They didn’t say it was and when I told my mom I was worried it might be, she was like, “nah, no way, they’re just going to ask you about your experience.” It was a freaking technical interview.
They gave me two minutes to plan a five minute lesson on recursion and then had me “teach” it. They also had me do the FizzBuzz test (easy) and program a class (I forgot the syntax for the constructor because ✨ anxiety ✨ but I think I guessed correctly). This was just…super unexpected. The questions weren’t difficult, but I hate getting put on the spot and it was terrifying.
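For anyone bracing for the same surprise, here is a hedged sketch of those tasks in Python (the post doesn't say which language the interview used, so that's an assumption): FizzBuzz, a tiny recursive function of the kind a five-minute recursion lesson might build around, and a minimal class with a constructor.

```python
# FizzBuzz: print 1..n, replacing multiples of 3 with "Fizz",
# multiples of 5 with "Buzz", and multiples of both with "FizzBuzz".
def fizzbuzz(n):
    for i in range(1, n + 1):
        if i % 15 == 0:
            print("FizzBuzz")
        elif i % 3 == 0:
            print("Fizz")
        elif i % 5 == 0:
            print("Buzz")
        else:
            print(i)

# Recursion in miniature: factorial defined in terms of itself,
# with a base case that stops the chain of self-calls.
def factorial(n):
    if n <= 1:                        # base case
        return 1
    return n * factorial(n - 1)       # recursive case

# The "program a class" part: in Python the constructor is __init__.
class Student:
    def __init__(self, name, major):
        self.name = name
        self.major = major

fizzbuzz(15)
print(factorial(5))                   # 120
print(Student("sam", "compsci").major)
```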
*Cries in first-year compsci major.*
0 notes
Text
#medium#code#interview#quesitons#answers#javascript#codeblr#studyblr#computer#science#cs#compsci#quiz#study#guide
1 note
Text
"Balaji’s death comes three months after he publicly accused OpenAI of violating U.S. copyright law while developing ChatGPT, a generative artificial intelligence program that has become a moneymaking sensation used by hundreds of millions of people across the world.
Its public release in late 2022 spurred a torrent of lawsuits against OpenAI from authors, computer programmers and journalists, who say the company illegally stole their copyrighted material to train its program and elevate its value past $150 billion.
The Mercury News and seven sister news outlets are among several newspapers, including the New York Times, to sue OpenAI in the past year.
In an interview with the New York Times published Oct. 23, Balaji argued OpenAI was harming businesses and entrepreneurs whose data were used to train ChatGPT.
“If you believe what I believe, you have to just leave the company,” he told the outlet, adding that “this is not a sustainable model for the internet ecosystem as a whole.”
Balaji grew up in Cupertino before attending UC Berkeley to study computer science. It was then he became a believer in the potential benefits that artificial intelligence could offer society, including its ability to cure diseases and stop aging, the Times reported. “I thought we could invent some kind of scientist that could help solve them,” he told the newspaper.
But his outlook began to sour in 2022, two years after joining OpenAI as a researcher. He grew particularly concerned about his assignment of gathering data from the internet for the company’s GPT-4 program, which analyzed text from nearly the entire internet to train its artificial intelligence program, the news outlet reported.
The practice, he told the Times, ran afoul of the country’s “fair use” laws governing how people can use previously published work. In late October, he posted an analysis on his personal website arguing that point.
No known factors “seem to weigh in favor of ChatGPT being a fair use of its training data,” Balaji wrote. “That being said, none of the arguments here are fundamentally specific to ChatGPT either, and similar arguments could be made for many generative AI products in a wide variety of domains.”
Reached by this news agency, Balaji’s mother requested privacy while grieving the death of her son.
In a Nov. 18 letter filed in federal court, attorneys for The New York Times named Balaji as someone who had “unique and relevant documents” that would support their case against OpenAI. He was among at least 12 people — many of them past or present OpenAI employees — the newspaper had named in court filings as having material helpful to their case, ahead of depositions."
3K notes